SignGD with error feedback meets lazily aggregated technique: Communication-efficient algorithms for distributed learning

Authors

Abstract

The proliferation of massive datasets has led to significant interest in distributed algorithms for solving large-scale machine learning problems. However, communication overhead is a major bottleneck that hampers the scalability of distributed systems. In this paper, we design two communication-efficient algorithms. The first, named EF-SIGNGD, uses a 1-bit (sign-based) gradient quantization method to save communicated bits. Moreover, the error feedback technique, i.e., incorporating the error made by the compression operator into the next step, is employed to guarantee convergence. In the second algorithm, called LE-SIGNGD, we introduce a well-designed lazy aggregation rule on top of EF-SIGNGD that can detect gradients with small changes and reuse outdated information; LE-SIGNGD thus saves communication costs in both transmitted bits and communication rounds. Furthermore, we show that both algorithms are convergent under some mild assumptions. The effectiveness of the proposed algorithms is demonstrated through experiments on real and synthetic data.
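The two ingredients the abstract describes can be sketched compactly. The snippet below is a minimal illustration, not the paper's exact formulation: the scaled-sign compressor (sign times mean absolute value), the function names, and the norm-based lazy-skip threshold are all illustrative assumptions.

```python
import numpy as np

def ef_signgd_step(grad, error):
    """One worker-side step of sign compression with error feedback.

    The worker compresses the error-corrected gradient to its sign
    (1 bit per coordinate, plus one shared scale) and stores the
    compression residual, which is added back on the next round.
    """
    corrected = grad + error                 # fold in accumulated residual
    scale = np.mean(np.abs(corrected))       # single scalar carries magnitude
    compressed = scale * np.sign(corrected)  # 1-bit-per-entry message
    new_error = corrected - compressed       # residual fed back next step
    return compressed, new_error

def should_skip_upload(new_msg, last_sent, threshold):
    """Lazy-aggregation check (illustrative): reuse the last transmitted
    message when the new one has changed little since it was sent."""
    return np.linalg.norm(new_msg - last_sent) <= threshold

# Toy usage: one worker minimizing f(x) = 0.5 * ||x||^2, whose gradient is x.
x = np.array([1.0, -2.0, 0.5])
err = np.zeros_like(x)
lr = 0.1
for _ in range(300):
    g = x                                    # gradient of 0.5 * ||x||^2
    msg, err = ef_signgd_step(g, err)
    x = x - lr * msg                         # server applies the update
```

Without the `error` term the iterates of sign-based updates can stall or cycle; feeding the residual back is what lets the compressed method track plain gradient descent.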


Related articles

Communication-Efficient Algorithms For Distributed Optimization

This thesis is concerned with the design of distributed algorithms for solving optimization problems. The particular scenario we consider is a network with P compute nodes, where each node p has exclusive access to a cost function fp. We design algorithms in which all the nodes cooperate to find the minimum of the sum of all the cost functions, f1 + · · · + fP . Several problems in signal proce...


Error Exponents for Distributed Detection with Feedback

We investigate the effects of feedback on a decentralized detection system consisting of N sensors and a detection center. It is assumed that observations are independent and identically distributed across sensors, and that each sensor compresses its observations into a fixed number of quantization levels. We consider two variations on this setup. One entails the transmission of sensor data to ...


Learning implied constraints lazily

Explanations are a technique for reasoning about constraint propagation, which have been applied in many learning, backjumping and user interaction algorithms. To date, explanations have been recorded eagerly when the propagation happens, which leads to inefficient use of time and space, because many will never be used. In this paper we show that it is possible and highly effective to calculate ex...


General and Robust Communication-Efficient Algorithms for Distributed Clustering

As datasets become larger and more distributed, algorithms for distributed clustering have become more and more important. In this work, we present a general framework for designing distributed clustering algorithms that are robust to outliers. Using our framework, we give a distributed approximation algorithm for k-means, k-median, or generally any `p objective, with z outliers and/or balance ...


Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis

We study the fundamental problem of Principal Component Analysis in a statistical distributed setting in which each machine out of m stores a sample of n points sampled i.i.d. from a single unknown distribution. We study algorithms for estimating the leading principal component of the population covariance matrix that are both communication-efficient and achieve estimation error of the order of...



Journal

Journal title: Tsinghua Science & Technology

Year: 2022

ISSN: 1878-7606, 1007-0214

DOI: https://doi.org/10.26599/tst.2021.9010045